Choosing the appropriate scale is crucial for building an effective and informative representation of a complex system. Scientists carefully choose the scales of their experiments to extract the variables that describe the causal relations in the system, and have found that a coarse (macro) scale is sometimes more causal and informative than the many-parameter (micro) observations. The phenomenon that causality emerges under coarse-graining is called causal emergence (CE). Based on information theory, a number of recent works have quantitatively shown that CE does occur when a micro model is coarse-grained to a macro one. However, existing works have not addressed why and when CE happens. We quantitatively analyze the redistribution of uncertainty under coarse-graining and suggest that this redistribution is the cause of causal emergence. We further analyze the thresholds that determine whether CE happens. From the regularity of the transition probability matrix (TPM) of discrete systems, we derive mathematical expressions for the model properties and compute the threshold values for different operations. The results provide critical and specific conditions for CE, offering helpful guidance for choosing a proper coarse-graining operation, and also provide a new way to better understand the nature of causality and causal emergence.
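In prior causal-emergence work the information-theoretic quantification is usually Hoel's effective information (EI) of a transition probability matrix; whether this paper uses exactly that measure is an assumption here. A minimal numpy sketch with a classic toy TPM where the coarse-grained system is more informative:

```python
import numpy as np

def effective_information(tpm):
    """EI of a TPM under a uniform (maximum-entropy) intervention:
    mean KL divergence (in bits) of each row from the average row."""
    tpm = np.asarray(tpm, dtype=float)
    effect = tpm.mean(axis=0)  # effect distribution of the intervened system
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(tpm > 0, np.log2(tpm / effect), 0.0)
    return float((tpm * logs).sum(axis=1).mean())

# toy 4-state micro system: states 0-2 mix uniformly, state 3 is fixed
micro = np.array([
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [0,   0,   0,   1],
])
# coarse-graining {0,1,2} -> A, {3} -> B yields a deterministic macro TPM
macro = np.array([[1.0, 0.0],
                  [0.0, 1.0]])

print(effective_information(micro))  # ≈ 0.81 bits
print(effective_information(macro))  # 1.0 bit: causal emergence
```

The macro TPM is deterministic (EI of 1 bit), while the micro TPM's internal uncertainty drags its EI down to about 0.81 bits: coarse-graining has redistributed uncertainty out of the macro description.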
translated by Google Translate
Quantum machine learning (QML) has received increasing attention due to its potential to outperform classical machine learning methods in various problems. A subclass of QML methods is quantum generative adversarial networks (QGANs), which have been studied as a quantum counterpart of the classical GANs widely used in image manipulation and generation tasks. Existing work on QGANs is still limited to small-scale proof-of-concept examples based on images with significant down-scaling. Here we integrate classical and quantum techniques to propose a new hybrid quantum-classical GAN framework. We demonstrate its superior learning capabilities by generating $28 \times 28$ pixel grey-scale images, without dimensionality reduction or classical pre/post-processing, on multiple classes of the standard MNIST and Fashion-MNIST datasets, achieving results comparable to classical frameworks with three orders of magnitude fewer trainable generator parameters. To gain further insight into the workings of our hybrid approach, we systematically explore the impact of its parameter space by varying the number of qubits, the size of the image patches, the number of layers in the generator, the shape of the patches, and the choice of prior distribution. Our results show that increasing the size of the quantum generator generally improves the learning capability of the network. The developed framework provides a foundation for the future design of QGANs with optimal parameter sets tailored to complex image generation tasks.
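The patch idea (one small quantum sub-generator per image patch) can be illustrated with a toy, pure-numpy statevector simulation. This is not the paper's circuit: the ansatz (RY rotations only, no entanglement), the random untrained parameters, and the one-row-per-sub-generator patch layout are all assumptions of this sketch:

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, wire, n):
    # expand a single-qubit gate to the full n-qubit register and apply it
    ops = [gate if w == wire else np.eye(2) for w in range(n)]
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

def sub_generator(params, n_qubits):
    """Toy patch sub-generator: RY layers on |0...0>, returning measurement
    probabilities that are post-processed into pixel intensities."""
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0
    for layer in params:  # params has shape (layers, n_qubits)
        for w, theta in enumerate(layer):
            state = apply_single(state, ry(theta), w, n_qubits)
    return np.abs(state) ** 2

# assemble a 28x28 image from 28 sub-generators, one 28-pixel row each
rng = np.random.default_rng(0)
n_qubits, n_layers = 5, 2  # 2**5 = 32 >= 28 outcomes per patch
rows = []
for _ in range(28):
    probs = sub_generator(rng.uniform(0, np.pi, (n_layers, n_qubits)), n_qubits)
    rows.append(probs[:28] / probs[:28].max())  # crop and normalise to [0, 1]
image = np.stack(rows)
print(image.shape)  # (28, 28)
```

With 28 sub-generators of 2 layers x 5 qubits, this "generator" has only 280 rotation parameters, the kind of count that makes a three-orders-of-magnitude comparison against classical generators possible.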
There exists unexplained diverse variation within the predefined colon cancer stages when only features from either genomics or histopathological whole-slide images are used as prognostic factors. Unraveling this variation would improve staging and treatment outcomes. Motivated by the advancement of deep neural network libraries and the diverse structures and factors within genomic datasets, we aggregate atypical patterns in histopathological images with diverse carcinogenic expression from mRNA, miRNA, and DNA methylation as an integrated input source to an ensemble deep neural network for colon cancer stage classification and for stratifying samples into low- or high-risk survival groups. Our ensemble deep convolutional neural network model shows improved stage-classification performance on the integrated dataset: the fused input features yield an area under the receiver operating characteristic curve (AUC-ROC) of 0.95, compared with 0.71 and 0.68 when only genomic or image features, respectively, are used. The extracted features were also used to split patients into low- and high-risk survival groups; among the 2548 fused features, 1695 showed statistically significant differences in survival probability between the two risk groups.
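A low-/high-risk split of this kind is conventionally checked with a Kaplan-Meier estimate. The sketch below uses entirely synthetic data (a single hypothetical fused feature as the risk score, a median split, and no censoring); none of it comes from the paper's cohort:

```python
import numpy as np

def km_curve(times, events):
    """Kaplan-Meier survival estimate, processed one subject at a time
    (ties handled per observation, which is fine for a sketch)."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    at_risk, surv, curve = len(t), 1.0, []
    for ti, ei in zip(t, e):
        if ei:
            surv *= 1.0 - 1.0 / at_risk  # drop at each observed event
        at_risk -= 1
        curve.append((float(ti), surv))
    return curve

# hypothetical fused feature used as a risk score; split at the median
rng = np.random.default_rng(7)
feature = rng.normal(size=40)
times = rng.exponential(scale=np.where(feature > 0, 12.0, 30.0))
events = np.ones(40, dtype=int)  # all deaths observed (no censoring)
high = feature > np.median(feature)

km_high = km_curve(times[high], events[high])
km_low = km_curve(times[~high], events[~high])
print(km_high[-1][1], km_low[-1][1])  # both reach 0: every event observed
```

In the paper's setting, a log-rank test on such a pair of curves is what would mark a fused feature as showing a statistically significant survival difference.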
We address the problem of estimating correspondences from a general marker, such as a movie poster, to an image that captures such a marker. Conventionally, this problem is solved by fitting a homography model based on sparse feature matching. However, such methods can only handle plane-like markers, and sparse features do not fully exploit appearance information. In this paper, we propose a novel framework, NeuralMarker, which trains a neural network to estimate dense marker correspondences under various challenging conditions, such as marker deformation and harsh lighting. We also propose a novel marker-correspondence evaluation method that circumvents annotating real marker-image pairs, and create a new benchmark. We show that NeuralMarker significantly outperforms previous methods and enables new interesting applications, including augmented reality (AR) and video editing.
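The classical baseline this abstract contrasts with, fitting a homography to sparse matches, can be sketched in a few lines with the standard direct linear transform (the point coordinates below are made up):

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: fit a 3x3 homography H with dst ~ H @ src
    from at least 4 point correspondences (each an (x, y) pair)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)  # null-space vector = homography entries
    return H / H[2, 2]

def project(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# hypothetical marker corners and their positions in a captured image
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 20), (30, 22), (28, 40), (9, 41)]
H = fit_homography(src, dst)
print(np.round(project(H, (1, 1)), 3))  # maps back onto the 4th corner
```

NeuralMarker's point is that a single 3x3 model like this cannot represent deformed, non-planar markers, whereas a dense correspondence field can.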
There is a pressing market demand to minimize the test time of prompt gamma neutron activation analysis (PGNAA) spectrum measurement machines, so that they can serve as instant material analyzers, e.g., to classify waste samples on the spot and determine the best recycling method based on the detected composition of the test sample. This paper presents a new deep-learning classification development aimed at reducing the test time of PGNAA machines. We propose a random sampling method (RSM) and class activation maps (CAM) to generate "shrunken" samples and continuously train CNN models on them. The RSM reduces the measurement time of a sample, while CAM filters out the less important energy ranges of the shrunken samples. We reduce the total PGNAA measurement time to 2.5 seconds while maintaining about 96.88% accuracy on our dataset of 12 different substances. Compared with classifying different materials, classifying substances containing the same elements with good accuracy requires more test time (a higher sample count rate); for example, classifying copper alloys requires nearly 24 seconds of test time to reach 98% accuracy.
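The random sampling method is not specified in the abstract; one plausible reading, that a shorter acquisition is simulated by multinomially resampling a fraction of the detected events of a full spectrum, can be sketched as follows (the 60-second full scan and the synthetic spectrum are assumptions):

```python
import numpy as np

def shorten_measurement(spectrum, fraction, rng):
    """Simulate a shorter PGNAA acquisition by multinomially resampling a
    fraction of the originally detected events (a guess at what the
    paper's random sampling method does)."""
    total = int(spectrum.sum())
    kept = int(total * fraction)
    p = spectrum / spectrum.sum()  # per-channel detection probabilities
    return rng.multinomial(kept, p)

rng = np.random.default_rng(42)
full = rng.poisson(lam=200.0, size=1024)  # synthetic 1024-channel spectrum
short = shorten_measurement(full, 2.5 / 60.0, rng)  # 2.5 s of a 60 s scan
print(full.sum(), short.sum())
```

Shrunken spectra like `short` would then be the CNN's training inputs, with CAM additionally zeroing out the energy channels the model finds uninformative.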
The use of machine learning models in consequential decision-making often exacerbates societal inequity, in particular producing disparate impact on members of marginalized groups defined by race and gender. The area under the ROC curve (AUC) is widely used to evaluate the performance of scoring functions in machine learning, but it has been studied less in algorithmic fairness than other performance metrics. Due to the pairwise nature of the AUC, defining an AUC-based group fairness metric is pairwise-dependent and may involve both intra-group and inter-group AUCs. Importantly, considering only one category of AUCs is insufficient to mitigate unfairness in AUC optimization. In this paper, we propose a minimax learning and bias mitigation framework that incorporates both intra-group and inter-group AUCs while maintaining utility. Based on this Rawlsian framework, we design an efficient stochastic optimization algorithm and prove its convergence to the minimum group-level AUC. We conduct numerical experiments on synthetic and real-world datasets to validate the effectiveness of the minimax framework and the proposed optimization algorithm.
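The intra-/inter-group AUC distinction can be made concrete with the usual pairwise estimator (the scores below are invented; the paper's stochastic optimizer is not reproduced here):

```python
import numpy as np

def pairwise_auc(pos_scores, neg_scores):
    """AUC as the probability that a positive is ranked above a negative,
    counting ties as one half."""
    pos = np.asarray(pos_scores)[:, None]
    neg = np.asarray(neg_scores)[None, :]
    return float(((pos > neg) + 0.5 * (pos == neg)).mean())

# hypothetical scores for two demographic groups
scores = {"a": {"pos": [0.9, 0.8, 0.6], "neg": [0.4, 0.3]},
          "b": {"pos": [0.7, 0.5],      "neg": [0.6, 0.2]}}

# intra-group: positives and negatives drawn from the same group
intra = {g: pairwise_auc(s["pos"], s["neg"]) for g, s in scores.items()}
# inter-group: positives from one group, negatives from the other
inter = {(g, h): pairwise_auc(scores[g]["pos"], scores[h]["neg"])
         for g in scores for h in scores if g != h}

print(intra)
print(min(list(intra.values()) + list(inter.values())))
```

The minimax (Rawlsian) objective then maximizes the worst of these group-level AUCs rather than the overall AUC alone.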
In medical imaging, surface registration is widely used for performing systematic comparisons between anatomical structures, a prime example being the highly convoluted brain cortical surfaces. To obtain a meaningful registration, a common approach is to identify prominent features on the surfaces and establish a low-distortion mapping between them, with the feature correspondences encoded as landmark constraints. Prior registration works have mainly focused on using manually labeled landmarks and solving highly nonlinear optimization problems, which are time-consuming and hence hinder practical applications. In this work, we propose a novel framework for the automatic landmark detection and registration of brain cortical surfaces using quasi-conformal geometry and convolutional neural networks. We first develop a landmark detection network (LD-Net) that automatically extracts landmark curves based on the surface geometry. We then utilize the detected landmarks and quasi-conformal theory to achieve the surface registration. Specifically, we develop a coefficient prediction network (CP-Net) for predicting the Beltrami coefficients associated with the desired landmark-constrained registration, and a mapping network called the disk Beltrami solver network (DBS-Net) for generating quasi-conformal mappings from the predicted Beltrami coefficients, with the bijectivity guaranteed by quasi-conformal theory. Experimental results are presented to demonstrate the effectiveness of our proposed framework. Altogether, our work paves a new way for surface-based morphometry and medical shape analysis.
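For reference, the Beltrami coefficient that CP-Net predicts and DBS-Net solves for is the standard object of quasi-conformal theory: a mapping $f$ is quasi-conformal when it satisfies the Beltrami equation

```latex
\frac{\partial f}{\partial \bar{z}} = \mu(z)\,\frac{\partial f}{\partial z},
\qquad \|\mu\|_{\infty} < 1,
```

and the bound $\|\mu\|_{\infty} < 1$ is precisely what guarantees the bijectivity claimed above.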
In various learning-based image restoration tasks, such as image denoising and image super-resolution, degradation representations are widely used to model degradation processes and handle complicated degradation patterns. However, they are less explored in learning-based image deblurring, since blur kernel estimation does not perform well in challenging real-world cases. We argue that degradation representations are particularly necessary for image deblurring, because blurry patterns typically show much larger variations than noisy patterns or high-frequency textures. In this paper, we propose a framework to learn spatially adaptive degradation representations of blurry images. A novel joint image reblurring and deblurring learning process is proposed to improve the expressiveness of the degradation representations. To make the learned degradation representations effective for reblurring and deblurring, we propose a multi-scale degradation injection network (MSDI-Net) to integrate them into neural networks. With this integration, MSDI-Net can adaptively handle various and complicated blurry patterns. Experiments on the GoPro and RealBlur datasets show that our proposed deblurring framework with the learned degradation representations outperforms state-of-the-art methods with appealing improvements. The code is released at https://github.com/dasongli1/learning_degradation.
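What "spatially adaptive degradation" means can be illustrated with a toy reblurring step: a per-pixel weight map, standing in for the learned degradation representation (an assumption of this sketch), blends a mild and a strong blur:

```python
import numpy as np

def box_blur(img, k):
    # simple k x k box blur with edge padding
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def spatially_adaptive_blur(img, weight):
    """Blend a mild and a strong blur per pixel: `weight` in [0, 1] plays
    the role of a (hypothetical) spatial degradation representation."""
    return (1 - weight) * box_blur(img, 3) + weight * box_blur(img, 9)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
weight = np.linspace(0, 1, 32)[None, :].repeat(32, axis=0)  # blur grows left to right
out = spatially_adaptive_blur(img, weight)
print(out.shape)
```

A single global kernel cannot express this left-to-right variation, which is the kind of pattern the paper's per-pixel representation is meant to capture.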
Due to the relatively small sensors equipped in smartphone cameras, high noise commonly exists in images captured today, and this noise brings additional challenges to lossy image compression algorithms. Lacking the capacity to tell image details and noise apart, general image compression methods allocate additional bits to explicitly store the undesired image noise during compression and restore the unpleasant noisy image during decompression. Based on this observation, we optimize the image compression algorithm to be noise-aware via joint denoising and compression, resolving the bit-misallocation problem. The key is to transform the original noisy images into noise-free bits by eliminating the undesired noise during compression, so that the bits are later decompressed into clean images. Specifically, we propose a novel two-branch, weight-sharing architecture with plug-in feature denoisers to allow a simple and effective realization of this goal at little computational cost. Experimental results show that our method achieves a significant improvement over existing baseline methods on both synthetic and real-world datasets. Our source code is available at https://github.com/felixcheng97/DenoiseCompression.
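The bit-misallocation argument can be checked with a toy entropy computation: quantizing a noisy signal yields a larger symbol entropy, hence a larger bit cost, than quantizing its clean version (a synthetic piecewise-flat "image", not the paper's data):

```python
import numpy as np

def entropy_bits(symbols):
    # empirical Shannon entropy (bits per symbol) of a discrete signal
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
clean = np.repeat(rng.integers(0, 16, 256), 16).astype(float)  # flat patches
noisy = clean + rng.normal(0, 1.0, clean.size)                 # sensor noise

q_clean = np.round(clean).astype(int)
q_noisy = np.round(noisy).astype(int)
print(entropy_bits(q_noisy), entropy_bits(q_clean))
```

The extra entropy of `q_noisy` is exactly the budget a noise-aware codec can save by discarding the noise before encoding instead of spending bits to preserve it.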
Modern language models leverage increasingly large numbers of parameters to achieve performance on natural language understanding tasks. Ensembling these models in specific configurations for downstream tasks can improve performance even further. In this paper, we analyze bagged language models, comparing single language models to bagged ensembles that are roughly equivalent in final model size. We explore an array of model bagging configurations for natural language understanding tasks, with final ensemble sizes ranging from 300M to 1.5B parameters, and find that our ensembling methods are at best roughly equivalent to single LM baselines. We also note other positive effects of bagging and pruning in specific scenarios, such as variance reduction and minor performance improvements, according to the findings of our experiments.
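At inference time, bagging a set of classification heads reduces to averaging the members' predicted distributions; a minimal sketch with invented logits (not the paper's models):

```python
import numpy as np

def bagged_predict(member_logits):
    """Average softmax probabilities over ensemble members, then argmax."""
    logits = np.asarray(member_logits, dtype=float)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)  # per-member softmax
    return probs.mean(axis=0).argmax(axis=-1)  # bagged vote per example

# hypothetical logits from 3 small LMs on 2 examples with 3 labels each
members = [
    [[2.0, 0.5, 0.1], [0.2, 1.5, 0.9]],
    [[1.8, 0.7, 0.2], [0.1, 0.4, 1.6]],
    [[2.2, 0.3, 0.4], [0.3, 1.2, 1.1]],
]
# the members disagree on the second example; the averaged probabilities decide
print(bagged_predict(members))
```

Averaging over members trained on different bootstrap samples is what produces the variance reduction the abstract reports.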